Brain and Language
Elsevier BV
All preprints, ranked by how well they match Brain and Language's content profile, based on 11 papers previously published here. The average preprint has a 0.00% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.
Chacon, D. A.; Shrestha, S.; Dillon, B. W.; Bhatt, R.; Almeida, D.; Marantz, A.
At first glance, the brain's language network appears to be universal, but languages clearly differ. How does the brain adapt to the specific details of individual grammatical systems? Here, we present an MEG study on case and agreement in Hindi and Nepali. Both languages use split-ergative case systems. However, these systems interact with verb agreement differently: in Hindi, case features conspire to determine which noun phrase (NP) the verb agrees with (subject, object, or neither), but in Nepali the verb always agrees with the subject NP. We found that left inferior frontal and left anterior temporal regions are sensitive to case features in both languages. Across case configurations, these same brain areas in Hindi participants show different patterns of activity for sentences that require masculine vs. feminine marking on the verb, before comprehenders encounter it. Additionally, the left temporoparietal junction in Hindi shows different activity for subject and object agreement configurations. Neither finding is observed in Nepali participants. We suggest that this brain response demonstrates a selection of an agreement controller and pre-encoding of the verb's morphological features that is unique to Hindi. This shows that brain activity reflects psycholinguistic processes that are intimately tied to grammatical features.

Highlights
- The left inferior frontal lobe and the left anterior temporal lobe distinguish accusative objects from bare object NPs in Hindi and Nepali, and pre-emptively encode gender agreement features in Hindi.
- The left inferior parietal lobe shows a differential sensitivity to object-agreement and subject-agreement constructions in Hindi that is absent in Nepali.
- MEG can reveal differences in neural activity that reflect specific requirements of different grammatical systems.
Yu, X.; Mancha, S.; Tian, X.; Lau, E.
Although psycho-/neuro-linguistics has assumed a distinction between morphological and syntactic structure building, as in traditional theoretical linguistics, this distinction has been increasingly challenged by theoretical linguists in recent years. Opposing a sharp, lexicalist distinction between morphology and syntax, non-lexicalist theories propose common morpho-syntactic structure building operations that cut across the realms of "morphology" and "syntax", which lexicalist theories treat as distinct territories. Taking advantage of two pairs of contrasts in Mandarin Chinese with desirable linguistic properties, namely compound vs. simplex nouns (the "morphology" contrast, differing in morphological structure complexity per lexicalist theories) and separable vs. inseparable verbs (the "syntax" contrast, differing in syntactic structure complexity per lexicalist theories), we report one of the first pieces of evidence for shared neural responses to morphological and syntactic structure complexity in language comprehension, supporting a non-lexicalist view in which shared neural computations are employed across morpho-syntactic structure building. Specifically, both contrasts modulated neural responses in left anterior and centro-parietal electrodes in an a priori 275-400 ms time window, corroborated by topographical similarity analyses. These results serve as preliminary yet prima facie evidence for shared neural computations across morphological and syntactic structure building in language comprehension.
Mathew, A. M.; Muralikrishnan, R.; Gulati, M.; Bhattamishra, S.; Choudhary, K. K.
Ergativity marks subject arguments as agents of a transitive event, and thereby signals verbal transitivity and influences language comprehension. We report here on an event-related brain potential (ERP) study in Hindi, in which we investigated this interconnection to ascertain whether the ergative case as a processing cue, and its ERP correlates, can be generalized across and within ergative languages. The case marking on the subject argument (ergative or nominative case) in our study either matched or mismatched the transitivity of the light verb (transitive or intransitive) in compound light verb constructions. Ergative case violations due to an intransitive light verb evoked an N400 effect, whereas nominative case violations due to a transitive light verb elicited a P600 effect. The results reveal neurophysiological differences in the processing of ergative and nominative case alignment modulated by the transitivity of the light verbs. The findings highlight the need for cross-linguistic research to aim beyond universality and elucidate the mechanisms underlying the processing of language-specific structural variations.
Shalpoush, J.; Gallagher, D.; Yamada, E.; Ohta, S.
Background/Objectives: Despite notable advances in the neural mechanisms of second-language (L2) processing, few studies have systematically compared syntactic, semantic, and phonological processing of L2 within a single experimental design. We investigate the neurocognitive mechanisms underlying L2 sentence processing in native Japanese speakers with intermediate English proficiency. By integrating behavioral measures and event-related potentials (ERPs), we examined how syntactic, semantic, and phonological information influenced sentence comprehension. Methods: Twenty-seven participants completed an auditory sentence judgment task involving English sentences with a syntactic, semantic, or phonological error. Results: Behavioral results revealed the highest accuracy in the control and semantic conditions, while syntactic and phonological violations led to significantly lower performance, indicating greater processing difficulty in these domains. Among the three linguistic violation types, phonological violations elicited robust ERP negativities in both the 300-500 ms and 500-800 ms time windows, while syntactic and semantic violations evoked less consistent neural responses in this L2 auditory sentence judgment task. These results suggest that mismatches in expected phonological forms hinder lexical activation, triggering a negativity that resembles an N400 but reflects different underlying processes. Conclusion: We found non-canonical neural response patterns in L2 learners, characterized by sensitivity to phonological anomalies but minimal neural disruption for semantic or syntactic anomalies. The current study contributes to our understanding of L2 sentence processing in native Japanese speakers, particularly by aligning real-time neural responses with behavioral performance. This work offers implications for pedagogical practices and assessment strategies tailored to neurodiverse bilingual populations.
Harding, E. E.; Sammler, D.; Kotz, S. A.
Considerable debate surrounds syntactic processing similarities in language and music. Yet few studies have investigated how syntax interacts with meter, given that metrical regularity varies across domains. Furthermore, there are reports of individual differences in syntactic and metrical structure processing in music and language. Thus, a direct comparison of individual variation in syntax and meter processing across domains is warranted. In a behavioral (Experiment 1) and an EEG study (Experiment 2), participants engaged in syntactic processing tasks with sentence and melody stimuli that were more or less metrically regular and followed a preferred or non-preferred (but correct) syntactic structure. We further employed a range of cognitive diagnostic tests, parametrically indexed verbal and musical abilities using a principal component analysis, and correlated cognitive factors with the behavioral and ERP results (Experiment 3). Based on previous results in the language domain, we expected that a regular meter would facilitate the syntactic integration of non-preferred syntax. While syntactic discrimination was better in regular than irregular meter conditions in both domains (Experiment 1), a P600 effect indicated different integration costs during the processing of syntactic complexities in the two domains (Experiment 2). Metrical regularity altered the P600 response to preferred syntax in language, while it modulated non-preferred syntax processing in music. Moreover, the experimental results yielded within-domain individual differences and showed that continuous metrics of musical ability are more informative than grouping participants into musicians and non-musicians (Experiment 3). These combined results suggest that the meter-syntax interface differs between language and music in how it shapes syntactic preferences.
te Rietmolen, N.; El Yagoubi, R.; Astesano, C.
French accentuation is held to belong to the level of the phrase. Consequently, French is considered a language without accent, with speakers who are 'deaf to stress'. Recent ERP studies investigating the French initial accent (IA), however, demonstrate that listeners not only discriminate between different stress patterns, but also expect words to be marked with IA early in the process of speech comprehension. Still, as words were presented in isolation, it remained unclear whether the preference applied to the lexical or to the phrasal level. In the current ERP study, we address this ambiguity and manipulate IA on words embedded in a sentence. Furthermore, we orthogonally manipulate semantic congruity to investigate the interplay between accentuation and later speech processing stages. Results reveal an early fronto-centrally located negative deflection when words are presented without IA, indicating a general dispreference for words presented without IA. Additionally, we found an effect of semantic congruity in the centro-parietal region (the traditional region for the N400), which was larger for words without IA than for words with IA. Furthermore, we observed an interaction between metrical structure and semantic congruity such that ±IA continued to modulate N400 amplitude fronto-centrally, but only in sentences that were semantically incongruent. The results indicate that presenting words without the initial accent hinders semantic conflict resolution. This interpretation is supported by the behavioral data, which show that participants were slower and made more errors when words had been presented without IA. As participants attended to the semantic content of the sentences, the finding underlines the automaticity of stress processing and indicates that IA may be encoded at a lexical level, where it facilitates semantic processing.
Matchin, W.; Almeida, D.; Hickok, G.; Sprouse, J.
In principle, functional neuroimaging provides uniquely informative data for addressing linguistic questions, because it can indicate distinct processes that are not apparent from behavioral data alone. This could involve adjudicating the source of unacceptability via the different patterns of elicited brain responses to different ungrammatical sentence types. However, it is difficult to interpret brain activations to syntactic violations. Such responses could reflect processes that have nothing intrinsically to do with linguistic representations, such as domain-general executive function abilities. In order to facilitate the potential use of functional neuroimaging methods to identify the source of different syntactic violations, we conducted an fMRI experiment to identify the brain activation maps associated with two distinct syntactic violation types: phrase structure (created by inverting the order of two adjacent words within a sentence) and subject islands (created by extracting a wh-phrase out of an embedded subject). The comparison of these violations to control sentences surprisingly showed no indication of a generalized violation response, with almost completely divergent activation patterns. Phrase structure violations seemingly activated regions previously implicated in verbal working memory and structural complexity in sentence processing, whereas subject islands appeared to activate regions previously implicated in conceptual-semantic processing, broadly defined. We review our findings in the context of previous research on syntactic and semantic violations using event-related potentials. Although our results suggest potentially distinct mechanisms underlying phrase structure and subject island violations, they are tentative and point to important methodological considerations for future research in this area.
Molinaro, N.; Nara, S.; Carreiras, M.
Is language selection in balanced bilinguals decodable from neural activity? Previous research employing various neuroimaging methods has not yielded a conclusive answer to this question. However, direct brain stimulation studies in bilinguals have detected different brain regions related to language production in separate languages. In the present MEG study, we addressed this question in a group of proficient Spanish-Basque bilinguals (N=45), who performed two tasks (picture naming and word reading). They were asked to name the line drawing or read the word out loud, either in Basque or Spanish, if the ink turned from black to green after one second (randomly, in 10% of trials). Sensor-level evoked activity was similar and could not be differentiated for the two languages in either task. Crucially, however, decoding analyses classified the language used in both tasks, starting ~100 ms after stimulus onset. Searchlight analyses revealed that activity detected in the right occipital-temporal sensors contributed the most to language decoding in the picture naming task, while the left occipital-temporal sensors contributed the most to decoding in word reading. The cross-task decoding analysis highlighted robust generalization effects from the picture naming to the word reading task in a later time interval. The present findings bridge the gap between non-invasive and invasive experimental evidence on bilingual lexical processing and provide novel evidence about the role of the two hemispheres in activating each language for picture naming and word reading.
Contier, F.; Weymar, M.; Wartenburger, I.; Rabovsky, M.
The functional significance of the two prominent language-related ERP components N400 and P600 is still under debate. It has recently been suggested that one important dimension along which the two vary is in terms of automaticity versus attentional control, with N400 amplitudes reflecting more automatic and P600 amplitudes reflecting more controlled aspects of sentence comprehension. The availability of executive resources necessary for controlled processes depends on sustained attention, which fluctuates over time. Here, we thus tested whether P600 and N400 amplitudes depend on the level of sustained attention. We re-analyzed EEG and behavioral data from a sentence processing task by Sassenhagen & Bornkessel-Schlesewsky (2015, Cortex), which included sentences with morphosyntactic and semantic violations. Participants read sentences phrase by phrase and indicated whether a sentence contained any type of anomaly as soon as they had the relevant information. To quantify the varying degree of sustained attention, we extracted a moving reaction time coefficient of variation over the entire course of the task. We found that the P600 amplitude was significantly larger during periods of low reaction time variability (high sustained attention) than in periods of high reaction time variability (low sustained attention). In contrast, the amplitude of the N400 was not affected by reaction time variability. These results thus suggest that the P600 component is sensitive to sustained attention while the N400 component is not, which provides independent evidence for accounts suggesting that P600 amplitudes reflect more controlled and N400 amplitudes more automatic aspects of sentence comprehension.
Landsiedel, J.; Koldewyn, K.
Human interactions contain potent social cues that meet not only the eye but also the ear. Although research has identified a region in the posterior superior temporal sulcus as being particularly sensitive to visually presented social interactions (SI-pSTS), its response to auditory interactions has not been tested. Here, we used fMRI to explore brain responses to auditory interactions, with a focus on temporal regions known to be important in auditory processing and social interaction perception. In Experiment 1, monolingual participants listened to two-speaker conversations (intact or sentence-scrambled) and one-speaker narrations in both a known and an unknown language. Speaker number and conversational coherence were explored in separately localised regions of interest (ROI). In Experiment 2, bilingual participants were scanned to explore the role of language comprehension. Combining univariate and multivariate analyses, we found initial evidence for a heteromodal response to social interactions in SI-pSTS. Specifically, right SI-pSTS preferred auditory interactions over control stimuli and represented information about both speaker number and interactive coherence. Bilateral temporal voice areas (TVA) showed a similar, but less specific, profile. Exploratory analyses identified another auditory-interaction-sensitive area in anterior STS. Indeed, direct comparison suggests modality-specific tuning, with SI-pSTS preferring visual information while aSTS prefers auditory information. Altogether, these results suggest that right SI-pSTS is a heteromodal region that represents information about social interactions in both visual and auditory domains. Future work is needed to clarify the roles of TVA and aSTS in auditory interaction perception and to further probe right SI-pSTS interaction-selectivity using non-semantic prosodic cues.

Highlights
- Novel work investigating social interaction perception in the auditory domain.
- Visually defined SI-pSTS shows a heteromodal response profile to interactions.
- Yet, it prefers visual to auditory stimuli. The reverse was found for anterior STS.
- Temporal voice areas show a qualitatively different response compared to SI-pSTS.
- Future studies are needed to corroborate the unique role of right SI-pSTS.
Mengxing, L.; Wang, S.; Vidaurre, C.; Guediche, S.; Lerma-Usabiaga, G.; Paz-Alonso, P.
Language, a uniquely human higher-order cognitive function, has traditionally been attributed to cortical mechanisms, with limited attention given to subcortical contributions. Recent advances in non-invasive neuroimaging have revealed that thalamic activity can be modulated by attention and task demands. Moreover, lesion studies have hinted at the thalamus's potential role in language processing. Nevertheless, the precise involvement of this structure in language remains unclear. Here, we argue that language-related modulations can occur as early as the sensory thalamic stage, challenging the conventional view of language processing as a predominantly cortical function. Using functional MRI to image 40 human participants (both female and male) while processing linguistic and non-linguistic stimuli in three main language systems (reading, speech comprehension, and speech production), we demonstrate specific activation of first-order nuclei during targeted language system tasks: lateral geniculate (LGN) for reading, medial geniculate (MGN) for speech comprehension, and ventrolateral (VLN) for speech production. Notably, we show that linguistic versus non-linguistic stimuli elicit functional modulations of left MGN and, to a lesser extent, left LGN during comprehension tasks (reading and speech), in line with prior studies of the lateralization of language processes. Multi-voxel classification analysis confirmed left-lateralized linguistic modulation in the MGN, but not in the LGN. Given the complexity of thalamic connectivity and its potential role in integrating sensory and cognitive processes, this work constitutes a relevant first step towards further understanding thalamic involvement and thalamocortical interactions in language function.

Significance Statement: This is the first study to examine the involvement of first-order thalamic nuclei in three main language systems: reading, speech comprehension, and production. Using fMRI, we show that the activation of the sensory thalamus is modulated by the linguistic nature of the stimuli: the lateral geniculate nucleus distinguished real words from scrambled images, and the medial geniculate nucleus differentiated intelligible speech from nonsensical sounds. This modulation was left-lateralized, especially in the medial geniculate nucleus, suggesting a language-specific mechanism. Our findings challenge the cortico-centric view of language functions, demonstrating that thalamic involvement occurs at early stages of language processing.

Highlights
- Language systems (reading, speech comprehension, speech production) can be used to functionally localize three main first-order thalamic nuclei (LGN, MGN, VLN).
- Language systems show specific first-order thalamic nuclei engagement.
- Language comprehension (reading and speech) modulates left-lateralized functional activation in corresponding first-order thalamic nuclei.
- No linguistic modulations were observed in terms of functional and structural connectivity.
Evans, S.; Price, C. J.; Diedrichsen, J.; Twomey, T.; Beedie, I.; Fraser, M.; MacSweeney, M.
Learning to read provides access to life-long educational and vocational opportunities. Some, but not all, deaf children find learning to read challenging, due to reduced access to language, whether spoken or signed. In hearing children, the ability to access and manipulate well-specified, abstract phonological representations of spoken language is important for developing strong reading skills. However, the role that phonology plays when deaf children learn to read is much less clear. Positive associations between speechreading (lip reading) and text reading have been observed in deaf and hearing children, and in deaf adults, suggesting that speechreading may play a role in reading development regardless of hearing status. Further support for this hypothesis would be provided by evidence that similar neural representations of phonology are evoked by visual speech and other language forms (auditory speech and text), and that these neural representations are related to reading proficiency. We used fMRI and Representational Similarity Analysis (RSA) to identify shared neural representations of phonology. A group of deaf adult participants (N=22), with a mixture of sign language and spoken language backgrounds and reading abilities, were presented with single lexical items as visual speech and text. Hearing adult participants (N=25) were presented with the same words, but as visual speech and auditory speech. We hypothesised that common neural representations of the phonological structure of English words would be found in each group in the superior and middle temporal cortex (STC/MTC), and that these shared representations would be more similar across different language forms in better readers. Our data supported these predictions, providing neurobiological evidence of the contribution of visual speech to abstract phonological representations that relate to reading proficiency, in both deaf and hearing adults.
Kim, J.; Lee, S.; Nam, K.
A central question in the psycholinguistics of visual word recognition is whether morphologically complex words are obligatorily decomposed into stems and affixes, or whether whole-word access can occur when forms are frequent and familiar. The present study investigated how morphological complexity and lexical frequency jointly shape neural responses by leveraging Korean nominal inflection, whose transparent stem-suffix structure permits a clean dissociation between base (stem) frequency and surface (whole-word) frequency. Twenty-five native Korean speakers completed a rapid event-related fMRI lexical decision task involving simple and inflected nouns that varied parametrically in both frequency measures. Representational similarity analysis (RSA) revealed robust encoding of surface frequency, but not base frequency, in the inferior frontal gyrus (IFG) pars opercularis and the supramarginal gyrus (SMG), with significantly stronger correlations for inflected than simple nouns. Univariate analyses converged with this result: surface frequency selectively increased activation for inflected nouns in inferior parietal regions, whereas base frequency showed no reliable effects in any ROI. These findings challenge models positing obligatory pre-lexical decomposition, instead supporting accounts in which morphological processing is shaped by post-lexical, usage-driven lexical statistics. Taken together, our findings support a distributed perspective on morphological processing, suggesting that structural and statistical factors jointly constrain access to morphologically complex forms.
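The RSA logic this abstract describes can be illustrated generically. The sketch below is not code from the study: the item counts, voxel counts, noise level, and "surface frequency" values are all hypothetical, chosen only to show how a model dissimilarity matrix is rank-correlated with a neural one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 word stimuli x 50 voxels. A hypothetical per-word
# "surface frequency" value partly drives the voxel patterns.
n_items, n_voxels = 20, 50
surface_freq = rng.uniform(0, 5, n_items)
direction = rng.standard_normal(n_voxels)            # shared pattern axis
patterns = surface_freq[:, None] * direction          # frequency-driven signal
patterns = patterns + 0.5 * rng.standard_normal((n_items, n_voxels))  # noise

def rdm(x):
    """Pairwise Euclidean distances, upper triangle flattened."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    i, j = np.triu_indices(len(x), k=1)
    return d[i, j]

def spearman(a, b):
    """Rank correlation between two dissimilarity vectors."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

neural_rdm = rdm(patterns)                # from (toy) voxel patterns
model_rdm = rdm(surface_freq[:, None])    # from the frequency predictor
rho = spearman(neural_rdm, model_rdm)     # positive if frequency is encoded
print(round(rho, 2))
```

In the study itself this comparison would be run per region of interest and contrasted across inflected versus simple nouns; here the single `rho` is only meant to show the shape of the computation.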
Husta, C.; Meyer, A. S.; Drijvers, L.
Interlocutors often use the semantics of comprehended speech to inform the semantics of planned speech. Do representations of the comprehension and planning stimuli interact on a neural level? We used rapid invisible frequency tagging (RIFT) and EEG to probe the attentional distribution between spoken distractor words and target pictures in the picture-word interference (PWI) paradigm. We presented participants with auditory distractor nouns (auditory, f1, tagged at 54 Hz) together with categorically related or unrelated pictures (visual, f2, tagged at 68 Hz), which had to be named after a delay. RIFT elicits steady-state evoked potentials, which reflect attentional allocation to the tagged stimuli. When representations of the tagged stimuli interact, integrative effects have been observed at the intermodulation frequency resulting from an interaction of the base frequencies (f2 ± f1; Drijvers et al., 2021). Our results showed clear power increases at 54 Hz and 68 Hz during the tagging window, but no differences between the related and unrelated conditions. More interestingly, we observed a larger power difference in the unrelated compared to the related condition at the intermodulation frequency (68 Hz − 54 Hz = 14 Hz), indicating a stronger interaction between the auditory and visual representations when they were unrelated. Our results go beyond standard PWI results (e.g., Burki et al., 2020) by showing that participants do not have more difficulty visually attending to the related pictures or inhibiting the related auditory distractors. Instead, processing difficulties arise when the representations of the stimuli interact, meaning that participants might be trying to prevent integration between the auditory and visual representations in the related condition.

Significance Statement: Studying speech planning during comprehension with EEG has been difficult due to a lack of appropriate methodology. This study demonstrates that rapid invisible frequency tagging (RIFT) can be used to explore attentional allocation to speech planning and comprehension stimuli, as well as their interaction. Our results show that the content of the speech planning and comprehension representations affects their interaction in the neural signal, which should be considered whenever these processes are studied jointly. In future work, RIFT could be used to investigate speech planning and comprehension in more conversational settings, as tagging can be added to videos or speech segments. This is the first study to demonstrate that RIFT can be used together with EEG to study cognitive phenomena.
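The intermodulation logic behind this design can be sketched generically: when two frequency-tagged signals interact multiplicatively, spectral energy appears at f2 − f1. The simulation below is not from the study; the sampling rate, duration, interaction strength, and noise level are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                      # sampling rate in Hz (illustrative)
t = np.arange(0, 2.0, 1 / fs)  # 2 s of simulated signal
f1, f2 = 54.0, 68.0            # auditory and visual tagging frequencies

# Linear responses to each tagged stream, plus a multiplicative
# interaction term that creates energy at f2 - f1 and f2 + f1.
s1 = np.sin(2 * np.pi * f1 * t)
s2 = np.sin(2 * np.pi * f2 * t)
signal = s1 + s2 + 0.3 * s1 * s2 + 0.1 * rng.standard_normal(t.size)

power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def peak_power(f, half_width=0.5):
    """Maximum power within half_width Hz of frequency f."""
    band = (freqs >= f - half_width) & (freqs <= f + half_width)
    return power[band].max()

# Peaks appear at the tagged frequencies and, because of the
# interaction term, at the intermodulation frequency 68 - 54 = 14 Hz;
# 30 Hz serves as a background comparison.
for f in (54, 68, 14, 30):
    print(f, peak_power(f))
```

Without the `0.3 * s1 * s2` term, the 14 Hz peak vanishes, which is why power at the intermodulation frequency is taken as a signature of interaction between the two tagged representations.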
Perez-Navarro, J.; Klimovich-Gray, A.; Lizarazu, M.; Piazza, G.; Molinaro, N.; Lallier, M.
Cortical tracking of speech is relevant for the development of speech perception skills. However, no study to date has explored whether and how cortical tracking of speech is shaped by accumulated language experience, the central question of this study. In 35 bilingual children (6 y.o.) with considerably greater experience in one of their languages, we collected electroencephalography data while they listened to continuous speech in their two languages. Cortical tracking of speech was assessed at acoustic-temporal and lexico-semantic levels. Children showed more robust acoustic-temporal tracking in the less experienced language, and more sensitive cortical tracking of semantic information in the more experienced language. Additionally, and only for the more experienced language, acoustic-temporal tracking was specifically linked to phonological abilities, and lexico-semantic tracking to vocabulary knowledge. Our results indicate that accumulated linguistic experience is a relevant maturational factor for the cortical tracking of speech at different levels during early language acquisition.
Zugarramurdi, C.; Fernandez, L.; Lallier, M.; Valle-Lisboa, J.; Carreiras, M.
The precision of cortical tracking of auditory rhythmic stimuli in the low frequency ranges coding for prosodic (delta) and syllabic (theta) amplitude modulations in the speech signal has been proposed to contribute to the development of phonological processing and reading acquisition. The present study investigates the role of low-frequency cortical tracking of non-verbal auditory stimuli in reading acquisition through a longitudinal design, from before the onset of formal reading instruction until one year afterwards. At time one, 40 prereading children performed a passive listening task with amplitude-modulated white noise presented at delta (2 Hz) and theta (4 Hz) rates, while their neural activity was recorded via EEG. At time two, at the end of first grade, children's reading skills were assessed. Results show significant cortical tracking in prereading children at both frequencies, with larger responses for theta- than for delta-rate non-verbal auditory stimuli. Importantly, only prereading cortical tracking measures for delta-rate stimuli predicted reading acquisition one year later. These findings underscore the role of early neural synchronization to delta-rate rhythmic auditory stimuli in reading acquisition and support a potentially important role of early prosodic processing in the development of future reading skills.
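Amplitude-modulated white noise of the kind described here is a standard construction. As a generic sketch (the sampling rate, duration, and modulation depth below are illustrative choices, not the study's stimulus parameters), the noise is multiplied by a slow sinusoidal envelope, and the modulation rate can be recovered from the envelope spectrum, which is what cortical tracking measures lock onto:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 8000                       # audio sampling rate in Hz (illustrative)
t = np.arange(0, 3.0, 1 / fs)   # 3 s stimulus

def am_noise(mod_hz, depth=1.0):
    """White noise whose amplitude envelope follows a sinusoid at mod_hz."""
    carrier = rng.standard_normal(t.size)
    envelope = 1 + depth * np.sin(2 * np.pi * mod_hz * t)
    return envelope * carrier

delta_stim = am_noise(2.0)      # delta-rate (prosodic-scale) modulation
theta_stim = am_noise(4.0)      # theta-rate (syllabic-scale) modulation

# Recover the modulation rate from the envelope spectrum of the
# delta-rate stimulus: rectify, remove the mean, take the FFT.
env = np.abs(delta_stim)
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(freqs[spec.argmax()])     # peak falls at the 2 Hz modulation rate
```

The same check on `theta_stim` would peak at 4 Hz; in the EEG analysis, tracking precision is quantified by how strongly neural activity follows these envelope rates.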
Li, J.; Fedorenko, E.; Saygin, Z. M.
The visual word form area (VWFA) is an experience-dependent region in the left ventral temporal cortex (VTC) of literate adults that responds selectively to visual words. Why does it emerge in this stereotyped location? Past research shows that the VWFA is preferentially connected to the left-lateralized frontotemporal language network. However, it remains unclear whether the presence of a typical language network and its connections with VTC are critical for the VWFA's emergence, and whether alternative functional architectures may support reading ability. We explored these questions in an individual (EG) born without the left superior temporal lobe but exhibiting normal reading ability. We recorded fMRI activation to visual words, objects, faces, and scrambled words in EG and neurotypical controls. We did not observe word selectivity either in EG's right homotope of the VWFA (rVWFA), the most expected location given that EG's language network is right-lateralized, or in her spared left VWFA (lVWFA), despite typical face selectivity in both the right and left fusiform face area (rFFA, lFFA). We replicated these results across scanning sessions (5 years apart). Moreover, in contrast with the idea that the VWFA is simply part of the language network that responds to general linguistic information, no part of EG's VTC showed selectivity to higher-level linguistic processing. Interestingly, multivariate pattern analyses revealed sets of voxels in EG's rVWFA and lVWFA that showed 1) higher within- than between-category correlations for words (e.g., Words-Words > Words-Faces), and 2) higher within-category correlations for words than for other categories (e.g., Words-Words > Faces-Faces). These results suggest that a typical left-hemisphere language network may be necessary for the emergence of focal word selectivity within the VTC, and that orthographic processing can be supported by a distributed neural code.
Hillis, M. E.; Kraemer, D. J. M.
Show abstract
How do novice language learners represent semantic information in their new language? The extent to which multiple languages are supported by divergent or overlapping semantic representations in bilinguals has been well-studied, but less is known about how new knowledge is integrated into established representational networks at the earliest stages of acquisition. Furthermore, examining language across modality (sign vs. speech) can provide unique insight into language unconfounded by perceptual features. We present two experiments in which hearing non-signers underwent brief training in American Sign Language (ASL) followed by fMRI scanning. Across both datasets (N=50), we use representational similarity analysis (RSA) to identify brain regions where neural patterns reflect semantic relationships between stimuli. In Study 2 (N=40) we show that multivariate neural measures of semantic representation in several frontal, temporal, and occipital regions reflect individual participant-level comprehension. These results demonstrate the role of frontal and temporal regions, especially bilateral superior temporal sulcus, in representing semantic content across language and modality in novice learners.
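A minimal sketch of the representational similarity analysis (RSA) logic the abstract relies on: build a neural representational dissimilarity matrix (RDM) from condition-by-voxel patterns and correlate its upper triangle with a semantic model RDM. Published RSA pipelines typically use rank correlations and cross-validated distances; this Pearson-based version is only illustrative:

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between condition patterns (rows = conditions, cols = voxels)."""
    return 1.0 - np.corrcoef(patterns)

def upper(m):
    """Off-diagonal upper triangle, the usual RSA comparison vector."""
    return m[np.triu_indices_from(m, k=1)]

def rsa_score(neural_patterns, model_rdm):
    """Correlation between the neural RDM and a model RDM; higher
    scores mean neural patterns mirror the model's similarity
    structure (here, hypothesized semantic relationships)."""
    return np.corrcoef(upper(rdm(neural_patterns)),
                       upper(model_rdm))[0, 1]
```

A region "represents semantic relationships between stimuli" in this framework when its RSA score against the semantic model RDM is reliably above chance.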
Reyes-Aguilar, A.; Licea-Haquet, G.; Arce, B. I.; Giordano, M.
Show abstract
Language comprehension requires sub-lexical (e.g., phonological) and lexical-semantic processing. We designed a task to compare the sub-lexical and lexical-semantic processing of verbs during functional magnetic resonance imaging (fMRI). Likewise, we were interested in the dichotomous representation of concrete-motor versus abstract-non-motor concepts, so two semantic categories of verbs were included: motor and mental. The findings support the involvement of the left dorsal stream of the perisylvian network for sub-lexical processing during the reading of pseudo-verbs and the ventral stream for lexical-semantic representation during the reading of verbs. According to the embodied or grounded cognition approach, modality-specific mechanisms, i.e., sensory-motor systems, and the well-established multimodal left perisylvian network contribute to semantic representation for concrete and abstract verbs. The present study detected a preferential modality-specific system for abstract-mental verbs. The visual system was recruited by mental verbs and showed functional connectivity with the right crus I/lobule VI of the cerebellum, suggesting the existence of this network to support the semantic representation of abstract concepts. These results confirm the dissociation between sub-lexical and lexical-semantic processing and provide evidence about the neurobiological basis of semantic representations for abstract verbs.
Ringer, H.; Sammler, D.; Daikoku, T.
Show abstract
Listeners implicitly use statistical regularities to segment continuous sound input into meaningful units, e.g., transitional probabilities between syllables to segment a speech stream into separate words. Implicit learning of such statistical regularities in a novel stimulus stream is reflected in a synchronisation of neural responses to the sequential stimulus structure. The present study aimed to test the hypothesis that neural tracking of the statistical stimulus structure is reduced in individuals with dyslexia, who have weaker reading and spelling skills, and possibly also weaker statistical learning abilities in general, compared to healthy controls. To this end, adults with and without dyslexia were presented with continuous streams of (non-speech) tones, which were arranged into triplets, such that transitional probabilities between single tones were high within triplets and low between triplets. We found that neural tracking of the triplet structure, i.e., phase coherence of the EEG response at the triplet rate relative to the tone rate, was reduced in adults with dyslexia compared to the control group. Moreover, enhanced neural tracking of the statistical structure was associated with better spelling skills. These results suggest that individuals with dyslexia have a fairly broad deficit in processing structure in sound, rather than a merely phonological deficit.
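Neural tracking of the triplet structure is quantified here via phase coherence at the triplet rate relative to the tone rate (the triplet rate being one third of the tone rate). A minimal inter-trial phase coherence (ITC) sketch, with illustrative epoch length and sampling rate rather than the study's actual parameters:

```python
import numpy as np

def itc(epochs, fs, freq_hz):
    """Inter-trial phase coherence at freq_hz: length of the mean
    unit phase vector across trials (0 = random phase across trials,
    1 = perfect phase locking).

    epochs : array of shape (n_trials, n_samples) of EEG epochs.
    fs     : sampling rate in Hz.
    """
    n = epochs.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    bin_idx = np.argmin(np.abs(freqs - freq_hz))  # nearest FFT bin
    phases = np.angle(np.fft.rfft(epochs, axis=1)[:, bin_idx])
    return np.abs(np.mean(np.exp(1j * phases)))
```

In a design like the one above, learning of the triplet structure would show up as ITC at the triplet rate rising above what the tone rate alone predicts; the group difference reported here is a reduction of that triplet-rate coherence in dyslexia.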